[Cursor] fix: Resolve test failures on master branch #131

Merged
merged 1 commit into master from fix/master-test-failures
May 1, 2025

Conversation

grapeot
Owner

grapeot commented May 1, 2025

Corrected three test failures in tests/test_llm_api.py identified when running tests locally on the master branch; a sketch of the updated assertions follows the list:

  1. test_create_openai_client: Updated the assertion to include the `base_url` parameter, which is now passed during client creation.
  2. test_query_anthropic: Updated the expected model name in the assertion from `claude-3-sonnet-20240229` to the current default `claude-3-7-sonnet-20250219`.
  3. test_query_gemini: Refactored the mock setup and assertions to correctly reflect the use of the chat session (`start_chat` and `send_message`) instead of the previous `generate_content` method.
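
Below is a minimal, self-contained sketch of the three updated assertions, written against plain unittest.mock stand-ins. The real tests patch the actual client factories in the module under test, so the argument names, the prompt text, and the OpenAI base_url value here are illustrative assumptions rather than code from this PR.

```python
# Sketch only: these mocks stand in for the patched OpenAI, Anthropic, and
# Gemini clients used by tests/test_llm_api.py; exact arguments are assumptions.
from unittest.mock import ANY, MagicMock

# 1. test_create_openai_client: client creation now passes base_url, so the
#    constructor assertion must include it (ANY avoids pinning the exact URL).
mock_openai = MagicMock()
mock_openai(api_key="test-key", base_url="https://api.openai.com/v1")
mock_openai.assert_called_once_with(api_key="test-key", base_url=ANY)

# 2. test_query_anthropic: the expected default model string changes from
#    claude-3-sonnet-20240229 to claude-3-7-sonnet-20250219.
mock_anthropic = MagicMock()
mock_anthropic.messages.create(
    model="claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "Hello"}],
)
mock_anthropic.messages.create.assert_called_once_with(
    model="claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "Hello"}],
)

# 3. test_query_gemini: assertions now follow the chat-session flow
#    (start_chat / send_message) instead of generate_content.
mock_chat_session = MagicMock()
mock_gemini_model = MagicMock()
mock_gemini_model.start_chat.return_value = mock_chat_session

chat = mock_gemini_model.start_chat(history=[])
chat.send_message("Hello")

mock_gemini_model.start_chat.assert_called_once_with(history=[])
mock_chat_session.send_message.assert_called_once_with("Hello")
```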

Copilot AI review requested due to automatic review settings May 1, 2025 04:51

Copilot AI left a comment

Pull Request Overview

This pull request fixes three test failures in the LLM API by updating assertion parameters and refactoring the mocking strategy to support the new chat session flow.

  • Updates the OpenAI client test to include `base_url` in its assertion.
  • Adjusts the expected model name in the Anthropic client test.
  • Refactors the Gemini client test to use the chat session (`start_chat` and `send_message`) instead of the obsolete `generate_content` method.
Comments suppressed due to low confidence (1)

tests/test_llm_api.py:299

  • [nitpick] Consider extracting the string literal "gemini-2.0-flash-exp" to a named constant to improve readability and ease future maintenance.
self.mock_gemini_client.GenerativeModel.assert_called_once_with("gemini-2.0-flash-exp")
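
If that nitpick were adopted, the extraction might look like the sketch below; the constant name and the standalone mock are illustrative and not taken from this PR.

```python
# Hypothetical refactor for the nitpick above: hoist the model name into a
# named constant so every assertion shares a single source of truth.
from unittest.mock import MagicMock

GEMINI_MODEL_NAME = "gemini-2.0-flash-exp"

mock_gemini_client = MagicMock()
mock_gemini_client.GenerativeModel(GEMINI_MODEL_NAME)
mock_gemini_client.GenerativeModel.assert_called_once_with(GEMINI_MODEL_NAME)
```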

Comment on lines +114 to +115
self.mock_gemini_model = MagicMock() # Mock for the GenerativeModel
self.mock_gemini_model.start_chat.return_value = self.mock_gemini_chat_session # Mock start_chat

Copilot AI May 1, 2025

[nitpick] Consider initializing self.mock_gemini_model only once instead of reassigning it after configuring the chat session, to maintain clarity and reduce potential confusion.

Suggested change
- self.mock_gemini_model = MagicMock() # Mock for the GenerativeModel
- self.mock_gemini_model.start_chat.return_value = self.mock_gemini_chat_session # Mock start_chat
+ self.mock_gemini_model = MagicMock( # Mock for the GenerativeModel
+     start_chat=MagicMock(return_value=self.mock_gemini_chat_session) # Mock start_chat
+ )
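
For what it's worth, the one-shot form works because keyword arguments passed to MagicMock() are set as attributes on the resulting mock, so it is behaviorally equivalent to the two-step assignment it replaces; which version reads more clearly is largely a matter of taste.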


grapeot merged commit 125eef3 into master May 1, 2025
1 check passed
grapeot deleted the fix/master-test-failures branch May 1, 2025 04:52